
[feature] [sub feature 1] Support model judge in evaluation #149

Open

SJTUyh wants to merge 4 commits into AISBench:master from SJTUyh:dev

Conversation

@SJTUyh (Collaborator) commented Feb 13, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry; just make the pull request and seek help from maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation for this PR and the goal you want to achieve with it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does the modification break backward compatibility for downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is best to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that triggered the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification may affect downstream or other related projects, this PR has been tested with those projects.
  • The CLA has been signed by all committers in this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

| Command | Introduction |
| --- | --- |
| /gemini review | Performs a Gemini code review of the current pull request in its current state. |
| /gemini summary | Provides a Gemini summary of the current pull request in its current state. |
| /gemini help | Displays a list of available Gemini commands. |
| /readthedocs build | Triggers a Read the Docs documentation build for the current pull request in its current state. |

@gemini-code-assist

Summary of Changes

Hello @SJTUyh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the evaluation pipeline by introducing a robust framework for model judging using Large Language Models. It allows for a new stage in the workflow where an LLM can assess the correctness of another model's predictions, providing a more nuanced and automated evaluation capability. The changes include new worker classes, dataset types, and utility functions to support this new judging mechanism, making the evaluation process more flexible and powerful.

Highlights

  • New Model Judging Workflow: Introduced a new JudgeInfer worker class and integrated it into the all and eval workflows, enabling the use of a Large Language Model (LLM) as a judge for model predictions.
  • LLM Judge Dataset Support: Added BaseJDGDataset and LLMJudgeDataset to provide a framework for handling datasets specifically designed for LLM judging, including loading predictions and preparing data for evaluation.
  • AIME2025 LLM Judge Configuration: Provided a concrete example of LLM judge integration with a new configuration file for the AIME2025 dataset, demonstrating how to set up a judge model, prompt templates, and evaluators.
  • Efficient JSONL Handling: Implemented load_jsonl and dump_jsonl utilities using mmap and orjson for faster and more memory-efficient reading and writing of JSONL files, which are crucial for handling large prediction datasets (a sketch of this pattern follows the list).
  • LLM Judge Evaluation: Added LLMJudgeCorrectEvaluator and a get_a_or_b text postprocessor to specifically evaluate the binary (A/B) output of an LLM judge.
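As referenced above, the mmap + orjson pattern could look roughly like the following. This is a minimal sketch of the approach named in the highlights, not the PR's actual implementation; signatures and error handling are illustrative only.

```python
import mmap
import orjson

def load_jsonl(path: str) -> list:
    """Read a JSONL file via mmap + orjson (sketch; assumes a non-empty file)."""
    records = []
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Memory-map the file so large prediction files are paged in lazily
        # instead of being copied into Python memory up front.
        for line in iter(mm.readline, b""):
            line = line.strip()
            if line:
                records.append(orjson.loads(line))
    return records

def dump_jsonl(records: list, path: str) -> None:
    """Write one JSON object per line; orjson.dumps returns bytes."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(orjson.dumps(rec, option=orjson.OPT_APPEND_NEWLINE))
```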


Changelog
  • ais_bench/benchmark/cli/workers.py
    • Added os, shutil, PredictionInvalidException, TMAN_CODES, load_jsonl, and dump_jsonl imports.
    • Introduced the JudgeInfer class, which handles the inference process for judge models, including configuration updates, task partitioning, dataset merging, and result post-processing.
    • Modified the Eval class to include a _result_post_process method for handling judge results and updating task configurations.
    • Added a _cfg_pre_process method to the Summarize class to adjust dataset abbreviations for judge inference.
    • Updated the WORK_FLOW dictionary to include JudgeInfer in the all and eval modes.
  • ais_bench/benchmark/configs/datasets/aime2025/aime2025_gen_0_shot_llmjudge.py
    • Added a new configuration file for AIME2025 dataset with LLM judging support.
    • Defined aime2025_reader_cfg, aime2025_infer_cfg, GRADER_TEMPLATE, aime2025_judge_infer_cfg, and aime2025_eval_cfg.
  • ais_bench/benchmark/datasets/aime2025.py
    • Imported LLMJudgeDataset from ais_bench.benchmark.datasets.utils.llm_judge.
    • Changed import path for BaseDataset to ais_bench.benchmark.datasets.base.
    • Added Aime2025JDGDataset class, inheriting from LLMJudgeDataset, to specify the base dataset class for AIME2025 judging.
  • ais_bench/benchmark/datasets/base.py
    • Added Type to the typing imports.
    • Introduced BaseJDGDataset as an abstract base class for judge datasets, providing methods for initialization, loading predictions, and managing original dataset instances.
  • ais_bench/benchmark/datasets/utils/datasets.py
    • Removed trailing whitespace from docstrings in get_sample_data and anls_compute functions.
  • ais_bench/benchmark/datasets/utils/llm_judge.py
    • Added a new file to define LLM judge utilities.
    • Implemented get_a_or_b text postprocessor to extract 'A' or 'B' from model predictions.
    • Defined LLMJudgeDataset class, extending BaseJDGDataset, for loading and processing predictions for LLM judging.
    • Created LLMJudgeCorrectEvaluator to score LLM judge outputs based on 'A' (correct) or 'B' (incorrect).
  • ais_bench/benchmark/utils/file/file.py
    • Imported mmap and orjson for optimized file operations.
    • Added load_jsonl function for efficient reading of JSONL files using mmap and orjson.
    • Added dump_jsonl function for efficient writing of JSONL files using orjson.

@SJTUyh changed the title from "[feature] Support model judge in evaluation" to "[feature] [sub feature 1] Support model judge in evaluation" on Feb 13, 2026
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new JudgeInfer worker to support model-judged evaluation, enhancing the evaluation workflow. However, it introduces potential security vulnerabilities, including prompt injection in the LLM judge templates and path traversal in file operations within the worker classes. Model and dataset abbreviations should be sanitized before being used in file paths, and LLM prompts should be designed to be resilient against injection from untrusted model outputs. Additionally, there are issues related to prompt consistency, error handling, and implementation robustness that need to be addressed.


```python
def load(self, predictions_path: str, **kwargs):

    dataset_content = self.dataset_instance.dataset["test"]
```

critical

This line will raise a TypeError if self.dataset_instance.dataset is a datasets.Dataset object, as it is not subscriptable with a string key like "test". The load method of Aime2025Dataset, which is used to create self.dataset_instance, returns a Dataset object, not a DatasetDict.

The logic should be changed to handle both Dataset and DatasetDict correctly. You should get the dataset object first and then check its type.

Suggested change:

```diff
- dataset_content = self.dataset_instance.dataset["test"]
+ dataset_content = self.dataset_instance.dataset
```
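Concretely, the suggested type check could look like the following; a minimal sketch assuming the standard datasets library types, with get_test_split as a hypothetical helper name:

```python
from datasets import DatasetDict

def get_test_split(ds):
    # A DatasetDict is keyed by split name; a plain Dataset has no "test" key.
    if isinstance(ds, DatasetDict):
        return ds["test"]
    return ds  # already a Dataset
```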

Comment on lines +176 to +182:

```python
key = (
    task["models"][0]["abbr"]  # same model
    + "_"
    + str(task['datasets'][0][0]['type'])  # same dataset type
    + "_"
    + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
)
```

high

Using str() on a dictionary to generate a grouping key is unreliable. The string representation of a dictionary can vary depending on key insertion order (in older Python versions) or how it's constructed. This could lead to tasks not being merged correctly when they should be.

To create a stable and canonical representation of the dictionary, I recommend using json.dumps with sort_keys=True. Using a tuple for the key is also generally safer than string concatenation.

Suggested change:

```python
key = (
    task["models"][0]["abbr"],  # same model
    str(task['datasets'][0][0]['type']),  # same dataset type
    json.dumps(task["datasets"][0][0]["infer_cfg"]["inferencer"], sort_keys=True),  # same inferencer with the same args
)
```
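A quick illustration of why str() is unstable while json.dumps(..., sort_keys=True) is canonical (the config keys below are made up for the example):

```python
import json

a = {"type": "GenInferencer", "max_out_len": 2048}
b = {"max_out_len": 2048, "type": "GenInferencer"}

# str() preserves insertion order, so semantically equal configs stringify differently:
assert str(a) != str(b)
# sort_keys=True yields one canonical serialization for both:
assert json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True)
```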

Comment on lines +16 to +17:

```python
match = re.search(r'[AB]', pred[-1:])
return match.group(0) if match else 'B'
```

high

Extracting the judge's decision by only looking at the last character (pred[-1:]) is very brittle. LLMs may not always follow formatting instructions perfectly and might add extra characters, newlines, or punctuation.

Consider using a more robust regular expression to find the first occurrence of 'A' or 'B' as a standalone character in the model's output. This will make the parsing much more reliable.

Suggested change:

```diff
- match = re.search(r'[AB]', pred[-1:])
+ match = re.search(r'\b[AB]\b', pred)
  return match.group(0) if match else 'B'
```
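With the suggested regex, the postprocessor tolerates trailing punctuation and surrounding prose; a small sketch with illustrative inputs:

```python
import re

def get_a_or_b(pred: str) -> str:
    # First standalone 'A' or 'B' anywhere in the output;
    # default to 'B' (incorrect) when no verdict is found.
    match = re.search(r'\b[AB]\b', pred)
    return match.group(0) if match else 'B'

assert get_a_or_b("A") == "A"
assert get_a_or_b("Answer: A.\n") == "A"   # pred[-1:] would see "\n" and fall back to 'B'
assert get_a_or_b("no verdict") == "B"
```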

Comment on lines +31 to +32:

```python
preds = load_jsonl(prediction_path)
preds.sort(key=lambda x: x.get('id',0))
```

high

If prediction_path does not exist, the preds variable is never initialized. This will cause an UnboundLocalError on the next line when preds.sort() is called.

To fix this, initialize preds to an empty list before the if block.

Suggested change:

```diff
- preds = load_jsonl(prediction_path)
- preds.sort(key=lambda x: x.get('id',0))
+ preds = []
+ if os.path.exists(prediction_path):
+     preds = load_jsonl(prediction_path)
+     preds.sort(key=lambda x: x.get('id', 0))
```


```python
# update judge cfgs to model cfgs and data
for task in tasks:
    task["datasets"][0][0]["predictions_path"] = osp.join(cfg.judge_infer.partitioner.out_dir, task["models"][0]["abbr"], f'{task["datasets"][0][0]["abbr"]}.jsonl')
```

medium (security)

File paths are constructed using abbr values (model and dataset abbreviations) taken directly from the configuration without sanitization. This allows for path traversal attacks where an attacker providing a malicious configuration could cause the application to read, write, or delete arbitrary files on the system. For example, if a model abbreviation is set to ../../etc, the application might attempt to operate on files outside the intended output directory.
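One way to harden this; a minimal sketch in which safe_join is a hypothetical helper, not part of the PR:

```python
import os.path as osp

def safe_join(base_dir: str, *parts: str) -> str:
    # Resolve the joined path and verify it stays inside base_dir, so an
    # abbr like "../../etc" cannot escape the intended output directory.
    base = osp.realpath(base_dir)
    joined = osp.realpath(osp.join(base, *parts))
    if osp.commonpath([base, joined]) != base:
        raise ValueError(f"unsafe path component: {parts!r}")
    return joined

# e.g. predictions_path = safe_join(out_dir, model_abbr, f"{dataset_abbr}.jsonl")
```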

```python
summarizer = build_from_cfg(summarizer_cfg)
summarizer.summarize(time_str=self.args.cfg_time_str)

def _cfg_pre_process(self, cfg: ConfigDict) -> None:
```

medium

The function's type hint specifies a return type of None, but the function actually returns cfg. The calling code in do_work also uses this return value. Please update the type hint to match the implementation and usage.

Suggested change:

```diff
- def _cfg_pre_process(self, cfg: ConfigDict) -> None:
+ def _cfg_pre_process(self, cfg: ConfigDict) -> ConfigDict:
```


```python
dataset_content = self.dataset_instance.dataset["test"]

# 加载被测模型的推理结果(排序后)
```

medium

This comment is in Chinese. To maintain consistency and improve readability for all contributors, please write comments in English.

Suggested change:

```diff
- # 加载被测模型的推理结果(排序后)
+ # Load the inference results of the model under test (sorted)
```

